
Domain-Invariant Representation





Domain Adaptation with Conditional Distribution Matching and Generalized Label Shift

Neural Information Processing Systems

Adversarial learning has demonstrated strong performance in the unsupervised domain adaptation setting by learning domain-invariant representations. However, recent work has shown limitations of this approach when label distributions differ between the source and target domains. In this paper, we propose a new assumption, generalized label shift (GLS), to improve robustness against mismatched label distributions.
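As a hypothetical illustration of the idea behind label-shift correction (a sketch, not the paper's exact algorithm): when the source and target class distributions differ, source examples can be reweighted by the class ratio w[y] = p_T(y) / p_S(y), so that the weighted source label distribution matches the target's.

```python
import numpy as np

def class_importance_weights(p_source, p_target):
    """Per-class importance weights w[y] = p_T(y) / p_S(y)."""
    p_source = np.asarray(p_source, dtype=float)
    p_target = np.asarray(p_target, dtype=float)
    return p_target / p_source

def reweighted_loss(losses, labels, weights):
    """Importance-weighted average of per-example source losses."""
    losses = np.asarray(losses, dtype=float)
    w = weights[np.asarray(labels)]
    return float(np.sum(w * losses) / np.sum(w))

# Example: source classes are balanced, target is skewed toward class 1,
# so class-1 examples count 4x as much as class-0 examples.
w = class_importance_weights([0.5, 0.5], [0.2, 0.8])
```

In practice the target label distribution p_T(y) is unknown and must be estimated; the function names above are illustrative, not from the paper.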


Exploiting Domain-Specific Features to Enhance Domain Generalization

Neural Information Processing Systems

The domain-specific representation is optimized through a meta-learning framework to adapt from source domains, targeting robust generalization on unseen domains. We empirically show that mDSDI is competitive with state-of-the-art domain generalization (DG) techniques.
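A minimal sketch (an assumed simplification, not the exact mDSDI procedure) of one meta-learning episode for domain generalization: held-out source domains play the role of unseen domains, so an update is judged by the adapted model's loss on them.

```python
import numpy as np

def meta_episode(theta, meta_train, meta_test, lr=0.1):
    """One meta-learning episode on a scalar least-squares model y = theta * x.

    meta_train / meta_test: lists of (x, y) arrays, one pair per source domain.
    Returns the adapted parameter and its loss on the held-out domains.
    """
    def grad(th, xs, ys):
        # d/d_theta of the mean squared error over one domain.
        return np.mean(2.0 * (th * xs - ys) * xs)

    # Inner step: adapt the parameter on the meta-train domains.
    adapted = theta - lr * np.mean([grad(theta, x, y) for x, y in meta_train])
    # Outer signal: loss of the adapted model on the held-out meta-test domains.
    meta_loss = np.mean([np.mean((adapted * x - y) ** 2) for x, y in meta_test])
    return adapted, meta_loss
```

The scalar model and `meta_episode` name are placeholders for illustration; the real method operates on neural feature extractors.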